Sketch-Guided Text-to-Image Diffusion Models
Text-to-Image models have introduced a remarkable leap in the evolution of
machine learning, demonstrating high-quality synthesis of images from a given
text-prompt. However, these powerful pretrained models still lack control
handles that can guide spatial properties of the synthesized images. In this
work, we introduce a universal approach to guide a pretrained text-to-image
diffusion model, with a spatial map from another domain (e.g., sketch) during
inference time. Unlike previous works, our method does not require training a
dedicated model or a specialized encoder for the task. Our key idea is to train
a Latent Guidance Predictor (LGP) - a small, per-pixel, Multi-Layer Perceptron
(MLP) that maps latent features of noisy images to spatial maps, where the deep
features are extracted from the core Denoising Diffusion Probabilistic Model
(DDPM) network. The LGP is trained only on a few thousand images and
constitutes a differentiable guiding map predictor, over which the loss is
computed and propagated back to push the intermediate images to agree with the
spatial map. The per-pixel training offers flexibility and locality which
allows the technique to perform well on out-of-domain sketches, including
free-hand style drawings. We take a particular focus on the sketch-to-image
translation task, revealing a robust and expressive way to generate images that
follow the guidance of a sketch of arbitrary style or domain. Project page:
sketch-guided-diffusion.github.io
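As a rough illustration of the mechanism described above, the sketch below shows the shape of such a guidance step in PyTorch: a per-pixel MLP maps denoiser features to a spatial map, and the loss against the target sketch is backpropagated into the noisy latent. All names (`LatentGuidancePredictor`, `extract_features`) and dimensions are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of LGP-style inference-time guidance (illustrative names only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentGuidancePredictor(nn.Module):
    """Per-pixel MLP: maps the deep feature vector at each spatial location
    of the denoiser to a value of the spatial map (e.g., an edge map)."""
    def __init__(self, feat_dim: int, out_channels: int = 1, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_channels),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) features extracted from the DDPM network.
        b, c, h, w = feats.shape
        x = feats.permute(0, 2, 3, 1).reshape(-1, c)        # one row per pixel
        return self.mlp(x).reshape(b, h, w, -1).permute(0, 3, 1, 2)

def guidance_step(noisy_latent, extract_features, lgp, target_sketch, step=0.1):
    """One guidance update: nudge the intermediate latent so the LGP's
    predicted map agrees with the target sketch."""
    noisy_latent = noisy_latent.detach().requires_grad_(True)
    feats = extract_features(noisy_latent)   # assumed hook into the denoiser
    loss = F.mse_loss(lgp(feats), target_sketch)
    (grad,) = torch.autograd.grad(loss, noisy_latent)
    return noisy_latent - step * grad
```

Because the MLP acts independently on each pixel's feature vector, the predictor carries no global layout prior of its own, which is consistent with the flexibility on out-of-domain sketches noted above.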
P+: Extended Textual Conditioning in Text-to-Image Generation
We introduce an Extended Textual Conditioning space in text-to-image models,
referred to as P+. This space consists of multiple textual conditions,
derived from per-layer prompts, each corresponding to a layer of the denoising
U-net of the diffusion model.
We show that the extended space provides greater disentangling and control
over image synthesis. We further introduce Extended Textual Inversion (XTI),
where the images are inverted into P+, and represented by per-layer tokens.
We show that XTI is more expressive and precise, and converges faster than
the original Textual Inversion (TI) space. The extended inversion method does
not involve any noticeable trade-off between reconstruction and editability and
induces more regular inversions.
We conduct a series of extensive experiments to analyze and understand the
properties of the new space, and to showcase the effectiveness of our method
for personalizing text-to-image models. Furthermore, we utilize the unique
properties of this space to achieve previously unattainable results in
object-style mixing using text-to-image models. Project page:
https://prompt-plus.github.io
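A minimal sketch of the core idea, under the assumption of a toy cross-attention block: each layer indexes its own prompt embedding instead of a single shared one. All module names and shapes here are illustrative, not the paper's implementation.

```python
# Illustrative per-layer textual conditioning; sizes and names are assumptions.
import torch
import torch.nn as nn

class PerLayerCrossAttention(nn.Module):
    """Toy cross-attention block that reads its text condition from a per-layer
    slot, so each denoising U-net layer can receive a different prompt."""
    def __init__(self, layer_idx: int, dim: int, text_dim: int):
        super().__init__()
        self.layer_idx = layer_idx
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(text_dim, dim, bias=False)
        self.to_v = nn.Linear(text_dim, dim, bias=False)

    def forward(self, x, per_layer_text):
        # x: (B, N, dim) image tokens; per_layer_text[i]: (B, T, text_dim)
        ctx = per_layer_text[self.layer_idx]   # this layer's own condition
        q, k, v = self.to_q(x), self.to_k(ctx), self.to_v(ctx)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v
```

Under this reading, XTI optimizes a separate token embedding per layer, so each slot of `per_layer_text` carries its own inverted token rather than one token shared across all layers.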
Prompt-to-Prompt Image Editing with Cross Attention Control
Recent large-scale text-driven synthesis models have attracted much attention
thanks to their remarkable capabilities of generating highly diverse images
that follow given text prompts. Such text-based synthesis methods are
particularly appealing to users who are accustomed to describing their intent
verbally. It is therefore natural to extend text-driven image synthesis
to text-driven image editing. Editing is challenging for these generative
models, since an innate property of an editing technique is to preserve most of
the original image, while in the text-based models, even a small modification
of the text prompt often leads to a completely different outcome.
State-of-the-art methods mitigate this by requiring users to provide a
spatial mask to localize the edit, thereby ignoring the original structure and
content within the masked region. In this paper, we pursue an intuitive
prompt-to-prompt editing framework, where the edits are controlled by text
only. To this end, we analyze a text-conditioned model in depth and observe
that the cross-attention layers are the key to controlling the relation between
the spatial layout of the image and each word in the prompt. With this
observation, we present several applications that control image synthesis by
editing the textual prompt only. These include localized editing by
replacing a word, global editing by adding a specification, and even delicately
controlling the extent to which a word is reflected in the image. We present
our results over diverse images and prompts, demonstrating high-quality
synthesis and fidelity to the edited prompts.
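The mechanism can be pictured as caching the source prompt's cross-attention maps and re-injecting them while denoising with the edited prompt. The toy version below assumes a word-swap edit where both prompts have the same token count; all interfaces are illustrative assumptions, not the authors' code.

```python
# Toy prompt-to-prompt style attention injection; interfaces are assumptions.
import torch

class AttentionStore:
    """Records cross-attention maps during the source-prompt pass, then
    replaces the edited-prompt maps with the cached ones to keep layout."""
    def __init__(self):
        self.maps, self.inject, self._i = [], False, 0

    def __call__(self, attn: torch.Tensor) -> torch.Tensor:
        if not self.inject:
            self.maps.append(attn.detach())   # first pass: record
            return attn
        cached = self.maps[self._i]           # second pass: re-inject
        self._i += 1
        return cached

def cross_attention(q, k, v, store: AttentionStore):
    # q: (B, N, d) image queries; k, v: (B, T, d) text keys/values.
    attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    return store(attn) @ v                    # cached maps steer the layout
```

Keeping the attention maps fixed while swapping the values derived from the new prompt is what lets a single word change re-texture a region without disturbing the rest of the composition.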
MotioNet: 3D Human Motion Reconstruction from Video with Skeleton Consistency
We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used motion representation. At the crux of our approach lies a deep neural network with embedded kinematic priors, which decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels. These attributes are fed into an integrated forward kinematics (FK) layer that outputs 3D positions, which are compared to the ground truth. In addition, an adversarial loss is applied to the velocities of the recovered rotations to ensure that they lie on the manifold of natural joint rotations.
The key advantage of our approach is that it learns to infer natural joint rotations directly from the training data, rather than assuming an underlying model, or inferring them from joint positions using a data-agnostic IK solver. We show that enforcing a single consistent skeleton along with temporally coherent joint rotations constrains the solution space, leading to more robust handling of self-occlusions and depth ambiguities.
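The FK layer at the heart of this decomposition can be pictured as a plain kinematic-chain traversal. The sketch below assumes rotation matrices and bones extending along the local y-axis, which is a simplification rather than the paper's exact parameterization.

```python
# Schematic forward-kinematics (FK) layer; topology and bone axis are
# simplifying assumptions for illustration.
import torch

def fk_layer(bone_lengths, rotations, parents, root_pos):
    """bone_lengths: (J,) length of the bone connecting each joint to its parent.
    rotations: (B, J, 3, 3) local joint rotation matrices.
    parents: parent index per joint, parents[0] == -1 for the root.
    root_pos: (B, 3) global root position. Returns (B, J, 3) joint positions."""
    B, J = rotations.shape[:2]
    positions = torch.zeros(B, J, 3)
    global_rot = torch.zeros(B, J, 3, 3)
    for j in range(J):
        if parents[j] == -1:                      # root joint
            positions[:, j] = root_pos
            global_rot[:, j] = rotations[:, j]
        else:
            p = parents[j]
            global_rot[:, j] = global_rot[:, p] @ rotations[:, j]
            bone = bone_lengths[j] * torch.tensor([0.0, 1.0, 0.0])
            positions[:, j] = positions[:, p] + global_rot[:, p] @ bone
    return positions
```

Because the 3D positions come out of this differentiable layer from a single bone-length vector plus per-frame rotations, a position loss can train the rotation branch while the skeleton stays consistent across the whole sequence.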
HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
Personalization has emerged as a prominent aspect within the field of
generative AI, enabling the synthesis of individuals in diverse contexts and
styles, while retaining high-fidelity to their identities. However, the process
of personalization presents inherent challenges in terms of time and memory
requirements. Fine-tuning each personalized model needs considerable GPU time
investment, and storing a personalized model per subject can be demanding in
terms of storage capacity. To overcome these challenges, we propose
HyperDreamBooth, a hypernetwork capable of efficiently generating a small set of
personalized weights from a single image of a person. By composing these
weights into the diffusion model, coupled with fast finetuning, HyperDreamBooth
can generate a person's face in various contexts and styles, with high subject
details while also preserving the model's crucial knowledge of diverse styles
and semantic modifications. Our method achieves personalization on faces in
roughly 20 seconds, 25x faster than DreamBooth and 125x faster than Textual
Inversion, using as few as one reference image, with the same quality and style
diversity as DreamBooth. Our method also yields a model that is 10,000x smaller
than a normal DreamBooth model. Project page: https://hyperdreambooth.github.io
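One way to picture the idea is a hypernetwork that maps a subject embedding to low-rank residuals for a set of frozen linear layers. The dimensions, rank, and layer layout below are illustrative assumptions, not the released architecture.

```python
# Hedged sketch of a weight-predicting hypernetwork; all sizes are assumptions.
import torch
import torch.nn as nn

class WeightHyperNet(nn.Module):
    """Maps a subject embedding z to low-rank deltas (B_i @ A_i), so each frozen
    linear weight W_i can be personalized as W_i + B_i @ A_i."""
    def __init__(self, embed_dim, layer_dims, rank: int = 4):
        super().__init__()
        self.rank, self.layer_dims = rank, layer_dims
        self.heads = nn.ModuleList(
            nn.Linear(embed_dim, rank * (din + dout)) for din, dout in layer_dims
        )

    def forward(self, z: torch.Tensor):
        # z: (B, embed_dim) embedding of a single reference image.
        deltas = []
        for head, (din, dout) in zip(self.heads, self.layer_dims):
            out = head(z)
            A = out[:, : self.rank * din].reshape(-1, self.rank, din)
            Bm = out[:, self.rank * din :].reshape(-1, dout, self.rank)
            deltas.append(Bm @ A)          # (B, dout, din) residual per layer
        return deltas
```

Under this reading, the predicted deltas give a strong initialization, and the fast fine-tuning pass mentioned in the abstract then sharpens subject details.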
Zoom-to-Inpaint: Image Inpainting with High-Frequency Details
Although deep learning has enabled a huge leap forward in image inpainting,
current methods are often unable to synthesize realistic high-frequency
details. In this paper, we propose applying super-resolution to coarsely
reconstructed outputs, refining them at high resolution, and then downscaling
the output to the original resolution. By introducing high-resolution images to
the refinement network, our framework is able to reconstruct finer details that
are usually smoothed out due to spectral bias - the tendency of neural networks
to reconstruct low frequencies better than high frequencies. To assist training
the refinement network on large upscaled holes, we propose a progressive
learning technique in which the size of the missing regions increases as
training progresses. Our zoom-in, refine and zoom-out strategy, combined with
high-resolution supervision and progressive learning, constitutes a
framework-agnostic approach for enhancing high-frequency details that can be
applied to any CNN-based inpainting method. We provide qualitative and
quantitative evaluations along with an ablation analysis to show the
effectiveness of our approach. This seemingly simple yet powerful approach
outperforms state-of-the-art inpainting methods.
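The zoom-in, refine and zoom-out strategy can be summarized in a few lines; `coarse_net`, `sr_net`, and `refine_net` below stand in for the three stages and are placeholders, not a specific released model.

```python
# Schematic zoom-in / refine / zoom-out inpainting wrapper (placeholder nets).
import torch
import torch.nn.functional as F

def zoom_to_inpaint(image, mask, coarse_net, sr_net, refine_net, scale: int = 2):
    """image: (B, 3, H, W); mask: (B, 1, H, W), 1 marks missing pixels.
    Returns the inpainted image at the original resolution."""
    coarse = coarse_net(image, mask)                        # 1) coarse fill
    up = sr_net(coarse)                                     # 2) super-resolve
    up_mask = F.interpolate(mask, scale_factor=scale, mode="nearest")
    refined = refine_net(up, up_mask)                       # 3) refine details
    out = F.interpolate(refined, size=image.shape[-2:],     # 4) zoom back out
                        mode="bilinear", align_corners=False)
    return torch.where(mask.bool(), out, image)             # keep known pixels
```

In this framing, the progressive learning technique simply enlarges the sampled holes as training advances, so the refinement network handles small upscaled holes before tackling large ones.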